This paper describes the (R)ules (o)f (t)he (R)oad (A)dvisor, an agent that provides recommended and possible actions, generated from a set of human-level rules. We describe the architecture and design of RoTRA both formally and with an example. Specifically, we use RoTRA to formalise and implement the UK "Rules of the Road", and describe how it can be incorporated into autonomous cars so that they can internally reason about complying with the rules of the road. In addition, the possible actions generated indicate whether the rules state that an action must be taken or merely recommend that it be taken, as prescribed by the UK Highway Code ("Rules of the Road"). The benefits of utilising this system include the ability to adapt to different regulations in different jurisdictions; clear traceability from rules to behaviour; and an external automated accountability mechanism that can check whether the rules were followed in a given situation. A simulation of an autonomous car demonstrates, via concrete examples, how the system works by placing the car in a number of scenarios that test its ability to adhere to the rules of the road. Autonomous cars incorporating this system can assure both internally and externally (to legal or regulatory bodies) that they comply with the rules of the road in a transparent manner, enabling greater trust between car companies, jurisdictions, and the public.
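The must/should distinction the abstract draws from the Highway Code can be sketched as a tiny rule table. This is a minimal illustration of the idea, not RoTRA's actual formalisation: the `Rule` class, the `"must"`/`"should"` tags, and the two example rules are all invented here for exposition.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch: each rule maps a situation predicate to an action,
# tagged with whether the rule *requires* the action or only *recommends* it.

@dataclass
class Rule:
    name: str
    condition: Callable  # situation dict -> bool
    action: str
    obligation: str      # "must" (required) or "should" (recommended)

RULES = [
    Rule("HC-stop-red", lambda s: s["light"] == "red", "stop", "must"),
    Rule("HC-slow-ice", lambda s: s["surface"] == "icy", "slow_down", "should"),
]

def advise(situation):
    """Return (required, recommended) actions for the current situation."""
    required = [r.action for r in RULES
                if r.obligation == "must" and r.condition(situation)]
    recommended = [r.action for r in RULES
                   if r.obligation == "should" and r.condition(situation)]
    return required, recommended
```

Keeping rules as data, rather than burying them in control code, is what gives the traceability and per-jurisdiction swapping the abstract mentions: an auditor can check which rules fired in a logged situation.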
Adaptive curricula in reinforcement learning (RL) have proven effective at producing policies robust to discrepancies between the training and testing environment. Recently, the Unsupervised Environment Design (UED) framework generalised RL curricula to generating sequences of entire environments, leading to new methods with robust minimax-regret properties. Problematically, in partially-observable or stochastic settings, optimal policies may depend on the ground-truth distribution over the environment's aleatoric parameters in the intended deployment setting, while curriculum learning necessarily shifts the training distribution. We formalise this phenomenon as curriculum-induced covariate shift (CICS) and describe how its occurrence in aleatoric parameters can lead to suboptimal policies. Directly sampling these parameters from the ground-truth distribution avoids the issue, but thwarts curriculum learning. We propose SAMPLR, a minimax-regret UED method that optimises the ground-truth utility function even when the underlying training data is biased due to CICS. We prove, and validate on challenging domains, that our approach preserves optimality under the ground-truth distribution while promoting robustness across the full range of environment settings.
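The CICS failure mode can be made concrete with a toy decision problem. Everything below (the two actions, the payoffs, the probabilities) is invented for exposition and is not from the paper: an aleatoric parameter theta is rare at deployment, but a curriculum that stresses "hard" settings oversamples it, so the curriculum-optimal action differs from the deployment-optimal one.

```python
# Toy illustration of curriculum-induced covariate shift (CICS).
# Action "a" pays 1 only when theta == 0; action "b" pays a constant 0.7.

def expected_return(action, p_theta1):
    """Expected reward of an action given P(theta = 1)."""
    if action == "a":
        return 1.0 * (1 - p_theta1)
    return 0.7

def best_action(p_theta1):
    return max(["a", "b"], key=lambda act: expected_return(act, p_theta1))

GROUND_TRUTH = 0.1  # deployment distribution: theta = 1 is rare
CURRICULUM = 0.5    # biased training distribution: theta = 1 oversampled
```

Here `best_action(GROUND_TRUTH)` is `"a"` (expected return 0.9), but training purely under the curriculum distribution favours `"b"` (0.7 beats 0.5). Evaluating returns under the ground-truth distribution even while training data is curriculum-biased is, loosely, what SAMPLR's grounding achieves.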
Transformers have become a predominant machine learning workload: they are not only the de facto standard for natural language processing tasks, but are also being deployed in other domains such as vision and speech recognition. Many transformer-based applications are real-time systems, such as machine translation and web search. These real-time systems often come with strict end-to-end inference latency requirements. Unfortunately, while most transformer computation comes from matrix multiplications, transformers also include several non-linear components that tend to become the bottleneck during inference. In this work, we accelerate the inference of BERT models on a tensor streaming processor. By carefully fusing all the non-linear components with the matrix multiplication components, we are able to efficiently utilise the on-chip matrix multiplication units, achieving a deterministic tail latency of 130 μs for a batch-1 inference through BERT-base, which is 6x faster than the current state of the art.
Traditionally, text simplification is treated as a monolingual translation task in which sentences of a source text are aligned with their simplified counterparts. However, especially for longer input documents, summarising the text (or dropping less relevant content altogether) plays an important role in the simplification process, which is currently not reflected in existing datasets. At the same time, resources for non-English languages are generally scarce, and prohibitively so for training new solutions. To tackle this problem, we pose core requirements for a system that can jointly summarise and simplify long source documents. We further describe the creation of a new dataset for joint simplification and summarisation, based on German Wikipedia and the German children's encyclopedia "Klexikon", comprising almost 2,900 documents. We release a document-aligned version that particularly highlights the summarisation aspect, and provide statistical evidence that this resource is also well suited for simplification. Code and data are available on GitHub: https://github.com/dennlinger/klexikon
A major challenge in embedding or visualising clinical patient data is the heterogeneity of variable types, including continuous lab values, categorical diagnostic codes, and missing or incomplete data. In particular, in EHR data, some variables are missing not at random (MNAR) but deliberately not collected, and are thus a source of information. For example, lab tests may be deemed necessary for some patients on the basis of suspected diagnoses, but not for others. Here we present the MURAL forest -- an unsupervised random forest for representing data with disparate variable types (e.g., categorical, continuous, MNAR). MURAL forests consist of a set of decision trees in which node-splitting variables are chosen at random, such that the marginal entropy of all other variables is minimised by the split. This allows us to also split on MNAR variables and discrete variables in a manner consistent with continuous variables. The ultimate goal is to learn the MURAL embedding of patients using average tree distances between them. These distances can be fed to nonlinear dimensionality reduction methods, such as PHATE, to obtain visualisable embeddings. While such methods are ubiquitous in continuous-valued datasets (such as single-cell RNA sequencing), they have not been used extensively on mixed-variable data. We showcase the use of our method on one artificial and two clinical datasets. We show that, using our approach, we can visualise and classify data more accurately than competing approaches. Finally, we show that MURAL can also be used to compare cohorts of patients via the recently proposed tree-sliced Wasserstein distances.
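The split criterion described above can be sketched as a scoring function. This is our own simplified reading, not the authors' code: a candidate split on one variable is scored by the size-weighted sum of marginal entropies of every *other* variable in the two children, and the split minimising this score is preferred.

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a list of discrete values."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def split_score(rows, split_col, threshold):
    """Weighted sum of marginal entropies of the non-split columns after
    splitting `rows` (tuples) on `split_col` at `threshold`; lower is better."""
    left = [r for r in rows if r[split_col] <= threshold]
    right = [r for r in rows if r[split_col] > threshold]
    if not left or not right:
        return float("inf")  # degenerate split
    other_cols = [c for c in range(len(rows[0])) if c != split_col]
    n = len(rows)
    score = 0.0
    for side in (left, right):
        w = len(side) / n
        score += w * sum(entropy([r[c] for r in side]) for c in other_cols)
    return score
```

A split that perfectly separates the other variables' values scores 0; an uninformative split leaves their entropy unchanged. Treating discretised MNAR indicators and categorical codes as just more columns in this sum is what lets one criterion cover all three variable types.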
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
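The NAIVEATTACK variant described above can be sketched in a few lines. The trigger shape, its placement, and the poisoning rate are illustrative choices, not the paper's exact configuration: a fixed patch is stamped into each poisoned image and its label is flipped *before* distillation, so the backdoor is carried into the synthetic dataset rather than injected at model-training time.

```python
def add_trigger(image, patch_size=2, value=1.0):
    """Return a copy of a 2D image (list of lists of floats) with a bright
    square stamped in the bottom-right corner."""
    out = [row[:] for row in image]
    h, w = len(out), len(out[0])
    for i in range(h - patch_size, h):
        for j in range(w - patch_size, w):
            out[i][j] = value
    return out

def poison(dataset, target_label, rate=0.1):
    """Poison the first `rate` fraction of (image, label) pairs: stamp the
    trigger and relabel to the attacker's target class."""
    k = max(1, int(len(dataset) * rate))
    poisoned = [(add_trigger(img), target_label) for img, _ in dataset[:k]]
    return poisoned + dataset[k:]
```

DOORPING differs in that the trigger itself is re-optimised at every distillation iteration rather than fixed up front, which is why it reaches near-1.0 ASR where this static version can fall short.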
We present a dynamic path planning algorithm to navigate an amphibious rotor craft through a concave, time-invariant obstacle field while attempting to minimize energy usage. We create a nonlinear quaternion state model that represents the rotor craft dynamics above and below the water. The 6-degree-of-freedom dynamics are used within a layered architecture to generate motion paths for the vehicle to follow, together with the required control inputs. The rotor craft has a 3-dimensional map of its surroundings that is updated via limited-range onboard sensor readings within the current medium (air or water). Path planning is done via PRM and D* Lite.
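The PRM step mentioned above can be sketched on a 2D slice. This is a generic PRM sketch under stated simplifications, not the paper's planner: the vehicle plans in 3D and replans with D* Lite, whereas here plain Dijkstra stands in for the search, and collision checking is a caller-supplied `free(point)` predicate.

```python
import math
import random
import heapq

def edge_free(a, b, free, steps=10):
    """Check a straight-line edge by sampling interpolated points."""
    return all(free(((1 - t) * a[0] + t * b[0], (1 - t) * a[1] + t * b[1]))
               for t in (s / steps for s in range(steps + 1)))

def build_prm(free, n_samples=200, k=8, seed=0):
    """Sample collision-free points in the unit square and connect each to
    its k nearest neighbors whose connecting edge is also collision-free."""
    rng = random.Random(seed)
    pts = []
    while len(pts) < n_samples:
        p = (rng.random(), rng.random())
        if free(p):
            pts.append(p)
    edges = {i: [] for i in range(len(pts))}
    for i, p in enumerate(pts):
        nearest = sorted(range(len(pts)), key=lambda j: math.dist(p, pts[j]))
        for j in nearest[1:k + 1]:  # skip index 0: the point itself
            if edge_free(p, pts[j], free):
                edges[i].append((j, math.dist(p, pts[j])))
    return pts, edges

def shortest_path_length(edges, start, goal):
    """Dijkstra over the roadmap (a stand-in for D* Lite's replanning)."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in edges[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")
```

D* Lite would additionally reuse the previous search when the onboard sensors reveal new obstacles, which matters here because the map is updated incrementally within the current medium.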
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack, with higher level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervision to loosely supervise the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality etc. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
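The masked-token objective described above can be sketched abstractly. The mask rate, sentinel id, and 0/1 loss below are placeholders for exposition only: the real model predicts VQ image-token ids with a Transformer conditioned on frozen-LLM text embeddings, trained with cross-entropy at the masked locations.

```python
import random

MASK = -1  # sentinel id for a masked position

def mask_tokens(tokens, rate=0.5, seed=0):
    """Replace a random subset of image-token ids with the MASK sentinel;
    return (masked sequence, indices the model must predict)."""
    rng = random.Random(seed)
    idx = sorted(rng.sample(range(len(tokens)), int(len(tokens) * rate)))
    masked = [MASK if i in idx else t for i, t in enumerate(tokens)]
    return masked, idx

def masked_loss(predictions, targets, masked_idx):
    """Average 0/1 mismatch over masked positions only -- a stand-in for
    the cross-entropy applied at masked locations."""
    return sum(predictions[i] != targets[i] for i in masked_idx) / len(masked_idx)
```

Because the loss touches only masked positions, decoding can fill many positions in parallel per step, which is the source of Muse's efficiency edge over token-by-token autoregressive models such as Parti.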
The visual dimension of cities has been a fundamental subject in urban studies, since the pioneering work of scholars such as Sitte, Lynch, Arnheim, and Jacobs. Several decades later, big data and artificial intelligence (AI) are revolutionizing how people move, sense, and interact with cities. This paper reviews the literature on the appearance and function of cities to illustrate how visual information has been used to understand them. A conceptual framework, Urban Visual Intelligence, is introduced to systematically elaborate on how new image data sources and AI techniques are reshaping the way researchers perceive and measure cities, enabling the study of the physical environment and its interactions with socioeconomic environments at various scales. The paper argues that these new approaches enable researchers to revisit the classic urban theories and themes, and potentially help cities create environments that are more in line with human behaviors and aspirations in the digital age.